# BERT Architecture Optimization
## Language Detection

A BERT-based language-identification model that classifies input text into one of 200 supported languages.

alexneakameni · MIT · Text Classification · multilingual · 1,210 downloads · 1 like
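As a quick illustration, a classifier like this can be queried through the `transformers` text-classification pipeline. The sketch below is minimal and hedged: the model ID is inferred from the author and model names above and may not match the actual repository.

```python
# Minimal sketch: language identification as text classification.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="alexneakameni/language_detection",  # assumed model ID
)

for text in ["Bonjour tout le monde", "Guten Morgen", "こんにちは"]:
    result = classifier(text)[0]
    print(f"{text!r} -> {result['label']} ({result['score']:.3f})")
```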
## LayoutLM Wikipedia JA

A LayoutLM model pre-trained on Japanese text, used primarily for token classification in Japanese documents.

jri-advtechlab · Large Language Model · Transformers (Japanese) · 22 downloads · 1 like
## EnvironmentalBERT Biodiversity

A biodiversity text classification model fine-tuned from EnvironmentalBERT-base, specialized in detecting ESG- and nature-related biodiversity text.

ESGBERT · Apache-2.0 · Large Language Model · Transformers (English) · 101 downloads · 5 likes
## M2-BERT 80M 32k Retrieval

An 80M-parameter pre-trained M2-BERT model that supports sequences up to 32,768 tokens and is optimized for long-context retrieval tasks.

togethercomputer · Apache-2.0 · Text Embedding · Transformers (English) · 1,274 downloads · 129 likes
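A hedged sketch of embedding a long document with this model follows. M2-BERT checkpoints ship custom modeling code, so the loading path below (`trust_remote_code=True`, reusing the `bert-base-uncased` tokenizer, and the `sentence_embedding` output key) is an assumption based on how this model family is commonly packaged; consult the model card for the authoritative usage.

```python
# Hedged sketch: long-context embedding with the M2-BERT retrieval model.
# The loading details and output key below are assumptions; check the
# model card before relying on them.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained(
    "togethercomputer/m2-bert-80M-32k-retrieval",
    trust_remote_code=True,  # custom Monarch Mixer modeling code
)
tokenizer = AutoTokenizer.from_pretrained(
    "bert-base-uncased", model_max_length=32768  # assumed tokenizer reuse
)

inputs = tokenizer(
    "A very long document to embed ...",
    return_tensors="pt", truncation=True, max_length=32768,
)
outputs = model(**inputs)
embedding = outputs["sentence_embedding"]  # assumed output key
print(embedding.shape)
```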
## GHisBERT

GHisBERT is a BERT-based model trained from scratch on historical German data, covering all documented developmental stages of the German language.

christinbeck · MIT · Large Language Model · Transformers · 37 downloads · 4 likes
## BERT Addresses

A BERT-based named entity recognition model specialized in labeling person names, organization names, and US addresses.

ctrlbuzz · Sequence Labeling · Transformers · 3,284 downloads · 8 likes
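A minimal sketch of span extraction with the token-classification pipeline; the model ID is inferred from the listing and is an assumption.

```python
# Minimal sketch: tagging names, organizations, and US addresses.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ctrlbuzz/bert-addresses",  # assumed model ID
    aggregation_strategy="simple",    # merge word pieces into entity spans
)

text = "Send the package to Jane Doe at 1600 Pennsylvania Ave NW, Washington, DC."
for entity in ner(text):
    print(entity["entity_group"], "->", entity["word"])
```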
## LUKE Japanese WordPiece Base

A LUKE model adapted from a Japanese BERT, optimized for Japanese named entity recognition.

uzabase · Apache-2.0 · Sequence Labeling · Transformers (Japanese) · 16 downloads · 4 likes
## EconoBERT

EconoBERT is a bert-base-uncased model fine-tuned on economics-domain datasets, suited to NLP tasks in economics, political science, and finance.

samchain · Apache-2.0 · Large Language Model · Transformers (English) · 78 downloads · 5 likes
## GeoLM Base Toponym Recognition

GeoLM is a language model designed to detect toponyms in sentences. It is pre-trained on global OpenStreetMap, WikiData, and Wikipedia data and fine-tuned on the GeoWebNews dataset.

zekun-li · Sequence Labeling · Transformers (English) · 186 downloads · 6 likes
## NEZHA CN Base

NEZHA is a Transformer-based neural contextualized representation model for Chinese language understanding, developed by Huawei Noah's Ark Lab.

sijunhe · Large Language Model · Transformers · 1,443 downloads · 12 likes
## BERT Ancient Chinese

A Chinese pre-trained language model based on the BERT architecture, supporting both classical and modern Chinese text processing.

Jihuai · Apache-2.0 · Large Language Model · Transformers (Chinese) · 625 downloads · 25 likes
## AraBERTMo Base V10

AraBERTMo is an Arabic pre-trained language model based on Google's BERT architecture, supporting fill-mask tasks.

Ebtihal · Large Language Model · Transformers · 39 downloads · 0 likes
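A minimal sketch of masked-token prediction with the fill-mask pipeline; the model ID is inferred from the author and version names above and may differ from the actual repository.

```python
# Minimal sketch: masked language modeling with AraBERTMo.
# The mask token is read from the tokenizer rather than hard-coded.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Ebtihal/AraBertMo_base_V10")  # assumed ID

mask = fill_mask.tokenizer.mask_token
for prediction in fill_mask(f"السلام {mask}")[:3]:
    print(prediction["token_str"], f"{prediction['score']:.3f}")
```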
## RoBERTa Base

A RoBERTa model pre-trained on Korean, suitable for a wide range of Korean natural language processing tasks.

klue · Large Language Model · Transformers (Korean) · 1.2M downloads · 33 likes
## BERT Medium Arabic

A pre-trained Arabic BERT-medium language model, trained on approximately 8.2 billion words of Arabic text.

asafaya · Large Language Model · Arabic · 66 downloads · 0 likes
## MuRIL Adapted Local

MuRIL is an open-source BERT model from Google, pre-trained on 17 Indian languages and their transliterated forms, providing multilingual representations.

monsoon-nlp · Apache-2.0 · Large Language Model · multilingual · 24 downloads · 2 likes
## KLUE BERT Base AIHub MRC

A Korean machine reading comprehension model fine-tuned from KLUE BERT-base on an AIHub dataset.

bespin-global · Question Answering System · Transformers (Korean) · 29 downloads · 1 like
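A minimal sketch of extractive question answering with the question-answering pipeline; the model ID is inferred from the listing and is an assumption.

```python
# Minimal sketch: Korean extractive QA with a KLUE BERT MRC model.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="bespin-global/klue-bert-base-aihub-mrc",  # assumed model ID
)

context = "서울은 대한민국의 수도이다."  # "Seoul is the capital of South Korea."
answer = qa(question="대한민국의 수도는 어디인가?", context=context)
print(answer["answer"], f"(score={answer['score']:.3f})")
```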
## BERT Base Arabic CAMeLBERT DA Sentiment

A sentiment analysis model fine-tuned from the CAMeLBERT dialectal Arabic (DA) model, supporting sentiment classification of Arabic text.

CAMeL-Lab · Apache-2.0 · Text Classification · Transformers (Arabic) · 26.07k downloads · 44 likes
## AraBERTMo Base V3

AraBERTMo is an Arabic pre-trained language model based on Google's BERT architecture, supporting fill-mask tasks.

Ebtihal · Large Language Model · Transformers (Arabic) · 15 downloads · 0 likes
## AraBERTMo Base V2

An Arabic pre-trained language model based on the BERT architecture, supporting masked language modeling tasks.

Ebtihal · Large Language Model · Transformers (Arabic) · 17 downloads · 0 likes
## DehateBERT Mono German

Fine-tuned from multilingual BERT to detect hate speech in German; the "mono" in the name refers to the monolingual training setup.

Hate-speech-CNERG · Apache-2.0 · Text Classification · German · 300 downloads · 3 likes
## AraBERTMo Base V4

AraBERTMo is an Arabic pre-trained language model based on the BERT architecture, supporting masked language modeling tasks.

Ebtihal · Large Language Model · Transformers (Arabic) · 15 downloads · 0 likes